Arguments Against AI


Vampire: The Masquerade Bloodlines 2 review – an interestingly toothless piece of noir fiction

The Guardian

'A 25-hour story that just about makes sense': Vampire: The Masquerade Bloodlines 2. You are an ancient and powerful vampire, and you wake up in the basement of some decrepit Seattle building, with no recent memories and a strange sigil on your hand. The first thing you do is feed on the cop who finds you, before smacking his partner into a wall so hard that his blood spatters the brick. A violent fanged rampage ensues, in which you beat up and tear apart rival undead and their ghouls while currying the favour of the local court of vampires and trying to keep your existence hidden from the mortal populace of this sultry city. But this is also a detective story: there's a younger night-stalker sharing your brain, a voice in your head named Fabian, who talks like a 1920s gumshoe (presumably because he once was one). Fabian isn't violent at all; he evidently works with the human police and the vampire underworld, snacking on consenting volunteers' blood and using his mind-delving powers to solve murders.


The Chinese Room Argument: Ray Kurzweil vs. John Searle

#artificialintelligence

" 'When we hear it said that wireless valves think,' [Sir Geoffrey] Jefferson said, 'we may despair of language.' But no cybernetician had said the valves thought, no more than anyone would say that the nerve-cells thought. It was the system as a whole that'thought', in Alan's [Turing] view…" -- Andrew Hodges (from his book Alan Turing: the Enigma). In his rewarding book, How to Create a Mind, Ray Kurzweil tackles John Searle's Chinese room argument. That said, I do find its philosophical sections somewhat naïve. Of course there's no reason why a "world-renowned inventor, thinker and futurist" should also be an accomplished philosopher.


Blippar Launches Free to Use WebAR SDK Tool

#artificialintelligence

Leading augmented reality (AR) technology company Blippar has confirmed its commitment to putting power in the hands of creators with the launch of its WebAR SDK technology. The toolkit will empower AR creators to build their own immersive WebAR experiences from the ground up using HTML and JavaScript. WebAR SDK users will have access to full 24/7 support from the Blippar team to help hone their creative campaigns. During its beta phase, the platform will be entirely free to use, create, and publish from, with its immersive WebAR experiences accessible and shareable across platforms including browsers, Facebook, TikTok, WeChat, and WhatsApp – a further step in ensuring access to AR creativity is available to everyone. Blippar's WebAR SDK includes its most advanced implementation of simultaneous localisation and mapping (SLAM) to date, boasting 99% tracking accuracy when locked, with less than a 1% margin of error in angular accuracy. SLAM is a set of computer vision technologies that allow AR developers and creatives to build far more interactive, immersive, and realistic AR experiences by using the device camera to create a mesh of the user's surroundings, including floors, walls, ceilings, and other objects.


Chicken Little AI Dystopians: Is the Sky Really Falling?

#artificialintelligence

He finds it ridiculous that some think "superintelligence itself is impossible." The author is astonishingly wrong in thinking computers can equal and surpass humans in mental performance. He seems unaware of the limitations of AI as mathematically argued by Nobel laureate Roger Penrose decades ago in "The Emperor's New Mind." Or Searle's Chinese room argument, which convincingly demonstrates that computers, and thus AI, will never understand what they do. Or Selmer Bringsjord's Lovelace test, which a computer must pass to demonstrate creativity.


Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding

#artificialintelligence

Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine "thinks." Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine's answers from the human's, then we must grant that the machine does indeed think.
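Turing's imitation game can be sketched as a short harness. This is a toy illustration only: the interfaces (`machine`, `human`, `judge`) and the canned answers are hypothetical, invented for this example, not anything Turing specified.

```python
import random

def imitation_game(machine, human, judge, questions, rng=None):
    """Toy sketch of Turing's 1950 imitation game. `machine` and `human`
    each map a question to an answer; `judge` sees two anonymised
    transcripts labelled 'A' and 'B' and returns the label it believes
    belongs to the human. Returns True if the judge was fooled, i.e.
    named the machine as the human."""
    rng = rng or random.Random()
    players = [("machine", machine), ("human", human)]
    rng.shuffle(players)                      # hide who is behind A and B
    labels = dict(zip("AB", players))
    transcripts = {
        label: [(q, answer(q)) for q in questions]
        for label, (_, answer) in labels.items()
    }
    verdict = judge(transcripts)              # the label judged to be human
    return labels[verdict][0] == "machine"

# Usage: a machine with canned answers faces a judge who notices that the
# human writes "4" rather than spelling the number out.
canned = {"What is 2 + 2?": "four", "Do you dream?": "sometimes"}
machine = lambda q: canned.get(q, "I'd rather not say")
human = {"What is 2 + 2?": "4", "Do you dream?": "yes"}.__getitem__
judge = lambda ts: next(l for l, t in ts.items() if t[0][1] == "4")
fooled = imitation_game(machine, human, judge, list(canned))  # False
```

The point of the anonymised transcripts is that the judge can rely only on the answers, never on which channel they arrived through, which is exactly the indistinguishability Turing proposed as the criterion.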


The AI Game of Thrones

#artificialintelligence

The AI field is plagued by irrational optimism and irrational despair. In 1973, Sir James Lighthill was asked to compile a report on the then-present state of artificial intelligence. His report criticized the hype surrounding artificial intelligence research, suggesting that AI's best algorithms would always fail at solving real world problems and could really only work for solving "baby" problems. His report followed almost twenty-five years of fervent research into human-like algorithms. The AI "summer" between the 1950s and 1970s saw DARPA investing millions into undirected research that touched on natural language processing.


Sir Roger Penrose: The man who proved black holes weren't 'impossible'

BBC News

This was the cauldron into which Sir Roger jumped when he started applying to black holes some of the principles he had honed in topology - the branch of mathematics describing the properties of geometric objects as they are twisted or stretched. Before his seminal 1965 paper, models could describe how these objects might form, but they were often dismissed as idealised situations with perfect symmetry that would be unlikely to occur in the "real world".

  Technology: Information Technology > Artificial Intelligence > Issues > Arguments Against AI (0.40)

'Extremely odd physics' of black holes could allow them to be used to create energy, scientists say

The Independent - Tech

Black holes could be harnessed for energy, scientists have said. The claim comes after researchers produced an experiment they say verifies a decades-old theory that such black holes could yield energy as a result of "extremely odd physics". Scientists at the University of Glasgow's School of Physics and Astronomy set out to validate Roger Penrose's 1969 work, using sound waves to test the "extremely odd physics" half a century after the theory was first proposed. The British physicist theorised that energy could be extracted from a spinning black hole by dropping an object, such as a rocket, into it and splitting the object in two.

  Country: Europe > United Kingdom > Scotland (0.06)
  Genre: Research Report (0.37)
  Technology: Information Technology > Artificial Intelligence > Issues > Arguments Against AI (0.62)
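The mechanism the Glasgow team set out to test is usually called the Penrose process, and it can be stated quantitatively. The figures below are standard textbook results for a rotating (Kerr) black hole, not taken from the article:

```latex
% A particle of energy E_in splits inside the ergosphere; one fragment
% falls in carrying negative energy E_neg < 0, so the escaping fragment
% leaves with more energy than went in:
E_{\mathrm{out}} = E_{\mathrm{in}} - E_{\mathrm{neg}} > E_{\mathrm{in}}

% Maximum efficiency of a single split, for a particle dropped from rest
% at infinity into an extremal Kerr black hole:
\eta_{\max} = \frac{\sqrt{2} - 1}{2} \approx 20.7\%

% Total rotational energy extractable from an extremal Kerr black hole
% of mass M before it is spun down to a Schwarzschild black hole:
E_{\mathrm{rot}} = \left(1 - \frac{1}{\sqrt{2}}\right) M c^{2} \approx 0.29\, M c^{2}
```

The energy gained by the escaping fragment is paid for by the black hole's spin, which is why the process only works for rotating black holes.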

Lighthill Report

#artificialintelligence

The Science Research Council has been receiving an increasing number of applications for research support in the rather broad field with mathematical, engineering and biological aspects which often goes under the general description Artificial Intelligence (AI). The research support applied for is sufficient in volume, and in variety of discipline involved, to demand that a general view of the field be taken by the Council itself. In forming such a view the Council has available to it a great deal of specialist information through its structure of Boards and Committees; particularly from the Engineering Board and its Computing Science Committee and from the Science Board and its Biological Sciences Committee. These include specialised reports on the contribution of AI to practical aims on the one hand and to basic neurobiology on the other, as well as a large volume of detailed recommendations on grant applications. To supplement the important mass of specialist and detailed information available to the Science Research Council, its Chairman decided to commission an independent report by someone outside the AI field but with substantial general experience of research work in multidisciplinary fields including fields with mathematical, engineering and biological aspects. I undertook to make such an independent report, on the understanding that it would simply describe how AI appears to a lay person after two months spent looking through the literature of the subject and discussing it orally and by letter with a variety of workers in the field and in closely related areas of research. Such a personal view of the subject might be helpful to other lay persons such as Council members in the process of preparing to study specialist reports and recommendations and working towards detailed policy formation and decision taking. The report which follows must certainly not be viewed as more than such a highly personal view of the AI field. 
The author is grateful for the large amount of help and advice readily given in reply to his many requests. He must emphasize, however, that none but himself is responsible for the opinions expressed in this report. They represent merely the broad overall view of the subject which he reached after such limited studies as he was able to make in the course of two months. Readers might possibly have expected that the report would include a summary, but the author decided against this partly because considerable material is summarised already in almost every paragraph.


Chinese room - Wikipedia

#artificialintelligence

The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.[1] The centerpiece of the argument is a thought experiment known as the Chinese room.[2] The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b] Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5]

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
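The shape of the thought experiment can be made concrete with a lookup-table toy. This drastically simplifies Searle's scenario (he imagines a full program, not a table), and the rule-book entries below are hypothetical stand-ins, not his examples; the point is only that every step is shape-matching, with no step that requires knowing what a symbol means.

```python
# Toy "Chinese room": the rule book pairs input symbols with output
# symbols, and the operator inside the room only matches shapes.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book prescribes for the input.
    Nothing in this lookup depends on the meaning of any symbol, which
    is the intuition Searle's argument turns on."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."
```

A sufficiently large rule book could, in principle, produce convincing answers while the operator understands nothing; whether that intuition scales up to real programs is exactly what the argument's critics dispute.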